When data streams from multiple sources, conventional training methods update model weights assuming the same level of reliability for each source; that is, a model does not consider the data quality of each source during training. In many applications, sources can have varied levels of noise or corruption that negatively affect the learning of a robust deep learning model. A key issue is that the quality of the data or labels from individual sources is often not available during training and can vary over time. Our solution to this problem is to consider the mistakes made while training on data originating from each source and utilise these to derive a perceived data quality for that source. This paper demonstrates a straightforward and novel technique that can be applied to any gradient descent optimiser: update model weights as a function of the perceived reliability of data sources within a wider data set. The algorithm controls the plasticity of a given model to weight updates based on the history of losses from individual data sources. We show that applying this technique can significantly improve model performance when training on a mixture of reliable and unreliable data sources, and maintains performance when models are trained on data sources that are all considered reliable. All code needed to reproduce this work's experiments and to implement the algorithm in the reader's own models is made available.
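The core idea can be sketched in a few lines. The following is a minimal illustration, not the authors' released code: the class name, the exponential-moving-average smoothing, and the softmin weighting are all assumptions chosen to mirror the description of "plasticity controlled by the history of per-source losses".

```python
import math

class SourceReliabilityWeights:
    """Tracks an exponential moving average (EMA) of per-source training losses
    and converts them into multiplicative weights on each source's updates.
    (Hypothetical sketch; the EMA and softmin choices are assumptions.)"""

    def __init__(self, sources, beta=0.9, temperature=1.0):
        self.beta = beta                  # EMA smoothing for the loss history
        self.temperature = temperature    # softness of the reliability weighting
        self.ema_loss = {s: 0.0 for s in sources}

    def update(self, source, loss):
        # Accumulate the history of mistakes made on this source's data.
        self.ema_loss[source] = (self.beta * self.ema_loss[source]
                                 + (1.0 - self.beta) * loss)

    def weight(self, source):
        # Sources with consistently high loss are treated as less reliable,
        # reducing the model's plasticity to their updates (softmin over EMAs).
        z = {s: math.exp(-l / self.temperature) for s, l in self.ema_loss.items()}
        total = sum(z.values())
        # Normalised so that identical loss histories give every source weight 1.
        return len(z) * z[source] / total


# Toy usage with a scalar parameter: one clean and one noisy source, each
# contributing a fixed (gradient, loss) pair at every optimisation step.
w, lr = 0.0, 0.1
rel = SourceReliabilityWeights(["clean", "noisy"])
for step in range(100):
    for source, (grad, loss) in {"clean": (0.5, 0.1), "noisy": (-2.0, 3.0)}.items():
        rel.update(source, loss)
        w -= lr * rel.weight(source) * grad   # the noisy source is down-weighted
```

The same scaling could be applied per mini-batch inside any gradient descent optimiser, which is what makes the technique optimiser-agnostic.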
Agitation is one of the neuropsychiatric symptoms with high prevalence in dementia, and it can negatively affect activities of daily living (ADL) and an individual's independence. Detecting agitation episodes can help provide people living with dementia (PLWD) with early and timely interventions. Analysing agitation episodes will also help identify modifiable factors, such as ambient temperature and sleep, that contribute to an individual's agitation. This preliminary study presents a supervised learning model for analysing the risk of agitation in PLWD using in-home monitoring data. The in-home monitoring data include motion sensors, physiological measurements, and the use of kitchen appliances from homes of PLWD between April 2019 and June 2021. We apply a recurrent deep learning model to identify agitation episodes validated and recorded by a clinical monitoring team. We present experiments to evaluate the efficacy of the proposed model. The proposed model achieves an average recall of 79.78%, precision of 27.66%, and F1 score of 37.64% with the best-performing parameters, indicating a good ability to identify agitation events. We also discuss using machine-learning models to analyse behavioural patterns from continuous monitoring data, and explore the clinical applicability of such models as well as the choice between sensitivity- and specificity-oriented monitoring applications.
Acoustic and visual sensing can support the contactless estimation of the weight of a container and the amount of its content while a person manipulates them. However, opaqueness and transparency (of both the container and the content), as well as the variability of materials, shapes, and sizes, make this problem challenging. In this paper, we present an open framework for benchmarking methods that estimate the capacity of a container, as well as the type, mass, and amount of its content. The framework includes a dataset, well-defined tasks and performance measures, baseline and state-of-the-art methods, and an in-depth comparative analysis of these methods. Deep learning with neural networks, using audio alone or a combination of audio and visual data, is preferred for classifying the type and amount of the content, either independently or jointly. Regression and geometric approaches with visual data are preferred for determining the capacity of the container. Results show that methods using only audio as input classify the content type and level with weighted average F1-scores of up to 81% and 97%, respectively. Estimating the container capacity with vision only, and the filling mass with audio-visual, multi-stage algorithms, reaches weighted average capacity and mass scores of up to 65%.
Artificial intelligence (AI) in the form of deep learning bears promise for drug discovery and chemical biology, $\textit{e.g.}$, to predict protein structure and molecular bioactivity, plan organic synthesis, and design molecules $\textit{de novo}$. While most of the deep learning efforts in drug discovery have focused on ligand-based approaches, structure-based drug discovery has the potential to tackle unsolved challenges, such as affinity prediction for unexplored protein targets, binding-mechanism elucidation, and the rationalization of related chemical kinetic properties. Advances in deep learning methodologies and the availability of accurate predictions for protein tertiary structure advocate for a $\textit{renaissance}$ in structure-based approaches for drug discovery guided by AI. This review summarizes the most prominent algorithmic concepts in structure-based deep learning for drug discovery, and forecasts opportunities, applications, and challenges ahead.
We present temporally layered architecture (TLA), a biologically inspired system for temporally adaptive distributed control. TLA layers a fast and a slow controller together to achieve temporal abstraction, allowing each layer to focus on a different time-scale. Our design is biologically inspired and draws on the architecture of the human brain, which executes actions at different timescales depending on the environment's demands. Such distributed control design is widespread across biological systems because it increases survivability and accuracy in both certain and uncertain environments. We demonstrate that TLA can provide many advantages over existing approaches, including persistent exploration, adaptive control, explainable temporal behavior, compute efficiency, and distributed control. We present two different algorithms for training TLA: (a) closed-loop control, where the fast controller is trained over a pre-trained slow controller, allowing better exploration for the fast controller, which decides whether to "act-or-not" at each timestep; and (b) partially open-loop control, where the slow controller is trained over a pre-trained fast controller, allowing the slow controller to pick a temporally extended action or defer the next n actions to the fast controller. We evaluate our method on a suite of continuous control tasks and demonstrate the advantages of TLA over several strong baselines.
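The closed-loop variant can be sketched as a simple rollout loop. This is a hypothetical illustration, not the authors' implementation: the function names, the fixed slow-controller refresh period, and the scalar toy environment are all assumptions.

```python
def run_closed_loop(slow_policy, fast_gate, env_states, slow_period=4):
    """Closed-loop TLA rollout sketch: the slow controller refreshes its
    proposed action every `slow_period` steps; the fast controller decides
    at every timestep whether to emit that action or do nothing ("act-or-not")."""
    actions = []
    proposed = None
    for t, state in enumerate(env_states):
        if t % slow_period == 0:
            proposed = slow_policy(state)      # coarse-timescale decision
        if fast_gate(state, proposed):         # fine-timescale "act-or-not" gate
            actions.append(proposed)
        else:
            actions.append(None)               # no-op: persistent, cheap behavior
    return actions


# Toy usage: the slow controller always proposes +1.0; the fast gate only
# acts when the (scalar) state drifts below zero.
states = [0.5, -0.2, 0.3, -0.4, 0.1, -0.6, 0.2, 0.0]
out = run_closed_loop(lambda s: +1.0, lambda s, a: s < 0.0, states)
```

The gate is where the compute savings come from: on timesteps where it declines to act, no new action needs to be computed or transmitted.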
Large Language Models (LLMs) have been the subject of active research, significantly advancing the field of Natural Language Processing (NLP). From BERT to BLOOM, LLMs have surpassed state-of-the-art results in various natural language tasks such as question answering, summarization, and text generation. Many ongoing efforts focus on understanding LLMs' capabilities, including their knowledge of the world, syntax, and semantics. However, extending the textual prowess of LLMs to symbolic reasoning has been slow and predominantly focused on tackling problems related to the mathematical field. In this paper, we explore the use of LLMs for automated planning - a branch of AI concerned with the realization of action sequences (plans) to achieve a goal, typically executed by intelligent agents, autonomous robots, and unmanned vehicles. We introduce Plansformer, an LLM fine-tuned on planning problems and capable of generating plans with favorable behavior in terms of correctness and length, with reduced knowledge-engineering effort. We also demonstrate the adaptability of Plansformer in solving different planning domains with varying complexities, owing to the transfer learning abilities of LLMs. For one configuration of Plansformer, we achieve ~97% valid plans, of which ~95% are optimal, on Towers of Hanoi - a puzzle-solving domain.
This paper describes Waymo's Collision Avoidance Testing (CAT) methodology: a scenario-based testing method that evaluates the safety of the Waymo Driver Automated Driving System's (ADS) intended functionality in conflict situations initiated by other road users that require urgent evasive maneuvers. Because an SAE Level 4 ADS is responsible for the dynamic driving task (DDT), when engaged, without immediate human intervention, evaluating a Level 4 ADS using scenario-based testing is difficult due to the potentially infinite number of operational scenarios in which hazardous situations may unfold. To that end, in this paper we first describe the safety test objectives for the CAT methodology, including the collision and serious-injury metrics and the reference behavior model representing a non-impaired, eyes-on-conflict human driver used to form an acceptance criterion. Afterward, we introduce the process for identifying potentially hazardous situations from a combination of human data, ADS testing data, and expert knowledge about the product design and associated Operational Design Domain (ODD). The test allocation and execution strategy is presented next, which exclusively utilizes simulations constructed from sensor data collected on a test track, from real-world driving, or from simulated sensor data. The paper concludes with the presentation of results from applying CAT to the fully autonomous ride-hailing service that Waymo operates in San Francisco, California, and Phoenix, Arizona. The iterative nature of scenario identification, combined with over ten years of on-road testing experience, results in a scenario database that converges to a representative set of responder-role scenarios for a given ODD. Using Waymo's virtual test platform, which is calibrated to data collected over many years of ADS development, the CAT methodology provides a robust and scalable safety evaluation.
We consider the problem of predictive monitoring (PM), i.e., predicting at runtime the satisfaction of a desired property from the current system's state. Due to its relevance for runtime safety assurance and online control, PM methods need to be efficient to enable timely interventions against predicted violations, while providing correctness guarantees. We introduce \textit{quantitative predictive monitoring (QPM)}, the first PM method to support stochastic processes and rich specifications given in Signal Temporal Logic (STL). Unlike most of the existing PM techniques that predict whether or not some property $\phi$ is satisfied, QPM provides a quantitative measure of satisfaction by predicting the quantitative (aka robust) STL semantics of $\phi$. QPM derives prediction intervals that are highly efficient to compute and with probabilistic guarantees, in that the intervals cover with arbitrary probability the STL robustness values relative to the stochastic evolution of the system. To do so, we take a machine-learning approach and leverage recent advances in conformal inference for quantile regression, thereby avoiding expensive Monte-Carlo simulations at runtime to estimate the intervals. We also show how our monitors can be combined in a compositional manner to handle composite formulas, without retraining the predictors nor sacrificing the guarantees. We demonstrate the effectiveness and scalability of QPM over a benchmark of four discrete-time stochastic processes with varying degrees of complexity.
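The conformal-calibration step at the heart of such prediction intervals can be sketched generically. This is a standard conformalized-quantile-regression (CQR) recipe under stated assumptions, not the paper's exact monitor: the function name and the toy calibration data are hypothetical.

```python
import math

def conformalize(lo_pred, hi_pred, lo_cal, hi_cal, y_cal, alpha=0.1):
    """CQR-style interval adjustment. `lo_cal`/`hi_cal` are lower/upper
    quantile predictions on a held-out calibration set; `y_cal` are the true
    values (e.g. STL robustness) on that set. Returns a test-time interval
    with finite-sample coverage of at least 1 - alpha (exchangeability assumed)."""
    n = len(y_cal)
    # Conformity score: how far the truth falls outside the predicted band
    # (negative when it falls strictly inside).
    scores = sorted(max(lo - y, y - hi)
                    for lo, hi, y in zip(lo_cal, hi_cal, y_cal))
    # Finite-sample-corrected (1 - alpha) empirical quantile of the scores.
    k = min(n - 1, math.ceil((n + 1) * (1.0 - alpha)) - 1)
    q = scores[k]
    # Widen (or shrink, if q < 0) the test-time interval by q.
    return lo_pred - q, hi_pred + q


# Toy usage: constant quantile predictions [0, 1] on a calibration set whose
# true values sometimes fall outside that band.
y_cal = [0.5, 1.2, -0.1, 0.9, 0.3, 1.5, 0.7, 0.2, 1.1, 0.6]
lo, hi = conformalize(0.0, 1.0, [0.0] * 10, [1.0] * 10, y_cal, alpha=0.1)
```

Because calibration reduces to sorting precomputed scores and shifting two endpoints, the runtime monitor avoids Monte-Carlo simulation entirely, which is what makes the intervals cheap to compute online.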
5G and beyond mobile networks will support heterogeneous use cases at an unprecedented scale, demanding automated control and optimization of network functions tailored to the needs of individual users. Such fine-grained control of the Radio Access Network (RAN) is not possible with current cellular architectures. To fill this gap, the Open RAN paradigm and its specifications introduce an open architecture with abstractions that enable closed-loop control and provide data-driven, intelligent optimization of the RAN at the user level. This is obtained through custom RAN control applications (i.e., xApps) deployed on near-real-time RAN Intelligent Controllers (near-RT RICs) at the network edge. Despite these premises, as of today the research community lacks a sandbox for building data-driven xApps and for creating large-scale datasets for effective AI training. In this paper, we address this problem by introducing ns-O-RAN, a software framework that integrates a real-world, production-grade near-RT RIC with a 3GPP-based simulated environment on ns-3, enabling the development of xApps, automated large-scale data collection, and testing of deep-reinforcement-learning-driven control policies for user-level optimization. In addition, we propose the first user-specific O-RAN Traffic Steering (TS) intelligent handover framework. It uses a randomized ensemble mixture, combined with a state-of-the-art convolutional neural network architecture, to optimally assign a serving base station to each user in the network. Our TS xApp, trained with more than 40 million data points collected by ns-O-RAN, runs on the near-RT RIC and controls its base stations. We evaluate its performance in large-scale deployments, showing that xApp-based handovers improve throughput and spectral efficiency by an average of 50% over traditional handover heuristics, with less mobility overhead.
We define a novel neuro-symbolic framework, argumentative reward learning, which combines preference-based argumentation with existing approaches to reinforcement learning from human feedback. Our method improves on prior work by generalizing human preferences, reducing the burden on users, and increasing the robustness of the reward model. We demonstrate this through a number of experiments.